library(tidyverse) # for data cleaning and plotting
## -- Attaching packages --------------------------------------- tidyverse 1.3.0 --
## v ggplot2 3.3.2 v purrr 0.3.4
## v tibble 3.0.4 v dplyr 1.0.2
## v tidyr 1.1.2 v stringr 1.4.0
## v readr 1.4.0 v forcats 0.5.0
## -- Conflicts ------------------------------------------ tidyverse_conflicts() --
## x dplyr::filter() masks stats::filter()
## x dplyr::lag() masks stats::lag()
library(lubridate) # for date manipulation
##
## Attaching package: 'lubridate'
## The following objects are masked from 'package:base':
##
## date, intersect, setdiff, union
library(openintro) # for the abbr2state() function
## Loading required package: airports
## Loading required package: cherryblossom
## Loading required package: usdata
library(palmerpenguins)# for Palmer penguin data
library(maps) # for map data
##
## Attaching package: 'maps'
## The following object is masked from 'package:purrr':
##
## map
library(ggmap) # for mapping points on maps
## Google's Terms of Service: https://cloud.google.com/maps-platform/terms/.
## Please cite ggmap if you use it! See citation("ggmap") for details.
library(gplots) # for col2hex() function
##
## Attaching package: 'gplots'
## The following object is masked from 'package:stats':
##
## lowess
library(RColorBrewer) # for color palettes
library(sf) # for working with spatial data
## Linking to GEOS 3.8.0, GDAL 3.0.4, PROJ 6.3.1
library(leaflet) # for highly customizable mapping
library(carData) # for Minneapolis police stops data
library(ggthemes) # for more themes (including theme_map())
theme_set(theme_minimal())
# Starbucks locations
Starbucks <- read_csv("https://www.macalester.edu/~ajohns24/Data/Starbucks.csv")
##
## -- Column specification --------------------------------------------------------
## cols(
## Brand = col_character(),
## `Store Number` = col_character(),
## `Store Name` = col_character(),
## `Ownership Type` = col_character(),
## `Street Address` = col_character(),
## City = col_character(),
## `State/Province` = col_character(),
## Country = col_character(),
## Postcode = col_character(),
## `Phone Number` = col_character(),
## Timezone = col_character(),
## Longitude = col_double(),
## Latitude = col_double()
## )
starbucks_us_by_state <- Starbucks %>%
filter(Country == "US") %>%
count(`State/Province`) %>%
mutate(state_name = str_to_lower(abbr2state(`State/Province`)))
# Lisa's favorite St. Paul places - example for you to create your own data
favorite_stp_by_lisa <- tibble(
place = c("Home", "Macalester College", "Adams Spanish Immersion",
"Spirit Gymnastics", "Bama & Bapa", "Now Bikes",
"Dance Spectrum", "Pizza Luce", "Brunson's"),
long = c(-93.1405743, -93.1712321, -93.1451796,
-93.1650563, -93.1542883, -93.1696608,
-93.1393172, -93.1524256, -93.0753863),
lat = c(44.950576, 44.9378965, 44.9237914,
44.9654609, 44.9295072, 44.9436813,
44.9399922, 44.9468848, 44.9700727)
)
#COVID-19 data from the New York Times
covid19 <- read_csv("https://raw.githubusercontent.com/nytimes/covid-19-data/master/us-states.csv")
##
## -- Column specification --------------------------------------------------------
## cols(
## date = col_date(format = ""),
## state = col_character(),
## fips = col_character(),
## cases = col_double(),
## deaths = col_double()
## )
If you were not able to get set up on GitHub last week, go here and get set up first. Then, do the following (if you get stuck on a step, don’t worry, I will help! You can always get started on the homework and we can figure out the GitHub piece later):
Make sure keep_md: TRUE is in the YAML heading. The .md file is a markdown (NOT R Markdown) file that is an interim step in creating the html file. Markdown files are displayed fairly nicely on GitHub, so we want to keep that file and look at it there. Click the boxes next to these two files, commit the changes (remember to include a commit message), and push them (green up arrow). Put your name at the top of the document.
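For reference, the YAML heading with keep_md turned on might look something like this (the title and author values are placeholders):

---
title: "Weekly Exercises"
author: "Your Name"
output:
  html_document:
    keep_md: TRUE
---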
For ALL graphs, you should include appropriate labels.
Feel free to change the default theme, which I currently have set to theme_minimal().
Use good coding practice. Read the short sections on good code with pipes and ggplot2. This is part of your grade!
When you are finished with ALL the exercises, uncomment the options at the top so your document looks nicer. Don’t do it before then, or else you might miss some important warnings and messages.
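If you're not sure what those options are, they are typically a knitr setup chunk like the one below (this is an assumption about the template, since the options themselves don't appear in the rendered file):

knitr::opts_chunk$set(echo = TRUE, message = FALSE, warning = FALSE)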
These exercises will reiterate what you learned in the “Mapping data with R” tutorial. If you haven’t gone through the tutorial yet, you should do that first.
(ggmap) Add the Starbucks locations to a world map. Add an aesthetic to the world map that sets the color of the points according to the ownership type. What, if anything, can you deduce from this visualization?
world <- get_stamenmap(
bbox = c(left = -180, bottom = -57, right = 179, top = 82.1),
maptype = "terrain",
zoom = 2)
ggmap(world) +
geom_point(data = Starbucks,
aes(x = Longitude, y = Latitude, color = `Ownership Type`),
alpha = .3,
size = .1) +
theme_map()+
labs(title = "Starbucks locations by Ownership Type")+
guides(color = guide_legend(override.aes = list(size=5)))
## Warning: Removed 1 rows containing missing values (geom_point).
I can see that almost all the locations in the United States are company owned, while in Europe and Asia, there are more Joint Ventures and Licensed locations. There are very few Franchise locations.
world <- get_stamenmap(
bbox = c(left = -93.72, bottom = 44.69, right = -92.45, top = 45.23),
maptype = "terrain",
zoom = 10)
ggmap(world) +
geom_point(data = Starbucks,
aes(x = Longitude, y = Latitude),
alpha = .3,
size = .1) +
theme_map()+
labs(title = "Starbucks locations in the Twin Cities Area")
## Warning: Removed 25455 rows containing missing values (geom_point).
world <- get_stamenmap(
bbox = c(left = -93.72, bottom = 44.69, right = -92.45, top = 45.23),
maptype = "terrain",
zoom = 8)
ggmap(world) +
geom_point(data = Starbucks,
aes(x = Longitude, y = Latitude),
alpha = .3,
size = .1) +
theme_map()+
labs(title = "Starbucks locations in the Twin Cities Area (zoom = 8)")
## Warning: Removed 25455 rows containing missing values (geom_point).
The zoom determines how much detail is on the map. The larger the area covered, the smaller the zoom number needs to be. That's why, if I make the zoom 8 instead of 10 on a map of the Twin Cities (a relatively small chunk of the world), the level of detail decreases.
Try one of the other map types (look up get_stamenmap() in the help and see the maptype options). Include a map with one of the other map types.
world <- get_stamenmap(
bbox = c(left = -93.72, bottom = 44.69, right = -92.45, top = 45.23),
maptype = "toner",
zoom = 10)
ggmap(world) +
geom_point(data = Starbucks,
aes(x = Longitude, y = Latitude),
color = "red",
alpha = .3,
size = .1) +
theme_map()+
labs(title = "Starbucks locations in the Twin Cities Area")
## Warning: Removed 25455 rows containing missing values (geom_point).
Add a point and label for Macalester College to the map using the annotate() function (see the ggplot2 cheatsheet).
world <- get_stamenmap(
bbox = c(left = -93.72, bottom = 44.69, right = -92.45, top = 45.23),
maptype = "terrain",
zoom = 10)
ggmap(world) +
geom_point(data = Starbucks,
aes(x = Longitude, y = Latitude),
alpha = .3,
size = .1) +
theme_map()+
annotate("point", x=-93.17, y=44.93, color="red")+
annotate("text", x=-93.17, y=44.945, label= "Mac", color="red")+
labs(title = "Starbucks locations in the Twin Cities Area, with Macalester College marked")
## Warning: Removed 25455 rows containing missing values (geom_point).
(geom_map()) The example I showed in the tutorial did not account for the population of each state in the map. In the code below, a new variable, starbucks_per_10000, is created that gives the number of Starbucks per 10,000 people. It is in the starbucks_with_2018_pop_est dataset.
census_pop_est_2018 <- read_csv("https://www.dropbox.com/s/6txwv3b4ng7pepe/us_census_2018_state_pop_est.csv?dl=1") %>%
separate(state, into = c("dot","state"), extra = "merge") %>%
select(-dot) %>%
mutate(state = str_to_lower(state))
##
## -- Column specification --------------------------------------------------------
## cols(
## state = col_character(),
## est_pop_2018 = col_double()
## )
starbucks_with_2018_pop_est <-
starbucks_us_by_state %>%
left_join(census_pop_est_2018,
by = c("state_name" = "state")) %>%
mutate(starbucks_per_10000 = (n/est_pop_2018)*10000)
dplyr review: Look through the code above and describe what each line of code does.

The first line reads in the 2018 census population estimates from the Dropbox link. The next line separates the leading period from the state name in the state variable, creating two variables (dot and state). dot just holds the leftover piece from the leading period, so the next line deselects that variable because it isn't relevant at all. The next line makes all the characters in the state names lowercase. In the next chunk, we create a new dataset called starbucks_with_2018_pop_est by left-joining the Starbucks-by-state dataset with the census dataset; the two are joined by each dataset's respective state-name variable. Finally, we create a new variable that divides the number of Starbucks in each state by that state's population and multiplies by 10,000, giving the number of Starbucks per 10,000 people in each state.
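To see what the separate() step is doing, here is a tiny sketch with made-up input (this assumes the raw state values start with a period, e.g. ".New York"; extra = "merge" keeps multi-word state names in one piece):

tibble(state = c(".Minnesota", ".New York")) %>%
separate(state, into = c("dot", "state"), extra = "merge")
# dot ends up empty, state keeps "Minnesota" and "New York" intact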
states_map <- map_data("state")
starbucks_with_2018_pop_est %>%
ggplot() +
geom_map(map = states_map,
aes(map_id = state_name,
fill = starbucks_per_10000)) +
geom_point(data = Starbucks %>%
filter(`Country` == "US") %>%
filter(`State/Province` != "AK") %>%
filter(`State/Province` != "HI"),
aes(x = Longitude, y = Latitude),
size = .05,
alpha = .2,
color = "goldenrod") +
expand_limits(x = states_map$long, y = states_map$lat) +
theme_map()+
labs(title = "Number of Starbucks per 10000 people", caption = "Ben Wagner made this plot")
There seem to be a lot of Starbucks locations along the West Coast, and in turn the number of Starbucks per 10,000 people is higher there than in other parts of the country. There also seem to be a lot of locations along the East Coast; however, many more people live in that area, so the number of locations per 10,000 people ends up lower.
(leaflet) Use the tibble() function to create a dataset that has 10-15 rows of your favorite places. The columns will be the name of the location, the latitude, the longitude, and a column that indicates if it is in your top 3 favorite locations or not. For an example of how to use tibble(), look at favorite_stp_by_lisa, which I created in the data R code chunk at the beginning.
favorite_place_ben <- tibble(
place = c("Home", "Macalester College", "Disney World",
"St. Paul House", "Lake Placid", "Flight Club",
"Cafe Astoria", "New Smyrna Beach", "Duff's", "Groveland Ice Rink"),
long = c(-78.82, -93.1712321, -81.5639,
-93.1503, -73.9799, -73.982507,
-93.10759, -80.9270, -78.798355, -93.1850535),
lat = c(42.742561, 44.9378965, 28.3852,
44.933541, 44.2795, 40.751295,
44.941177, 29.0258, 42.979141, 44.9346780),
top_three = c(TRUE, FALSE, FALSE,
TRUE, FALSE, FALSE,
FALSE, FALSE, TRUE, FALSE)
)
Create a leaflet map that uses circles to indicate your favorite places. Label them with the name of the place. Choose the base map you like best. Color your 3 favorite places differently than the ones that are not in your top 3 (HINT: colorFactor()). Add a legend that explains what the colors mean.
Connect all your locations together with a line in a meaningful way (you may need to order them differently in the original data).
If there are other variables you want to add that could enhance your plot, do that now.
pal <- colorFactor("viridis",
domain = favorite_place_ben$top_three)
favorite_place_ben<- favorite_place_ben %>%
arrange(desc(long))
leaflet(data = favorite_place_ben) %>%
addProviderTiles(providers$OpenStreetMap) %>%
addCircles(lng = ~long,
lat = ~lat,
label = ~place,
weight = 10,
opacity = 1,
color = ~pal(top_three)) %>%
addPolylines(lng = ~long,
lat = ~lat,
color = col2hex("red")) %>%
addLegend(pal = pal,
values = ~top_three,
opacity = 0.5,
title = "Location is in my Top 3",
position = "bottomright")
This section will revisit some datasets we have used previously and bring in a mapping component.
The data come from Washington, DC and cover the last quarter of 2014.
Two data tables are available:
Trips contains records of individual rentals.
Stations gives the locations of the bike rental stations.
Here is the code to read in the data. We do this a little differently than usual, which is why it is included here rather than at the top of this file. To avoid repeatedly re-reading the files, start the data import chunk with {r cache = TRUE} rather than the usual {r}. This code reads in the large dataset right away.
data_site <-
"https://www.macalester.edu/~dshuman1/data/112/2014-Q4-Trips-History-Data.rds"
Trips <- readRDS(gzcon(url(data_site)))
Stations<-read_csv("http://www.macalester.edu/~dshuman1/data/112/DC-Stations.csv")
##
## -- Column specification --------------------------------------------------------
## cols(
## name = col_character(),
## lat = col_double(),
## long = col_double(),
## nbBikes = col_double(),
## nbEmptyDocks = col_double()
## )
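For reference, the caching option goes in the chunk header itself; the import chunk might look like this (the chunk contents are the same lines shown above):

```{r cache = TRUE}
data_site <- "https://www.macalester.edu/~dshuman1/data/112/2014-Q4-Trips-History-Data.rds"
Trips <- readRDS(gzcon(url(data_site)))
Stations <- read_csv("http://www.macalester.edu/~dshuman1/data/112/DC-Stations.csv")
```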
Use the Stations data to make a visualization of the total number of departures from each station in the Trips data. Use either color or size to show the variation in the number of departures. This time, plot the points on top of a map. Use any of the mapping tools you'd like.
dep_by_station <- Trips %>%
group_by(sstation) %>%
summarize(n=n()) %>%
left_join(Stations,
by = c("sstation"="name"))
## `summarise()` ungrouping output (override with `.groups` argument)
Wash_map <- get_stamenmap(
bbox = c(left = -77.4178, bottom = 38.77, right = -76.3406, top = 39.15),
maptype = "terrain",
zoom = 10)
ggmap(Wash_map)+
geom_point(data = dep_by_station,
aes(x = long, y = lat, color= n),
alpha = 1,
size = 1.5) +
theme_map()+
labs(title = "Number of departures by station")
## Warning: Removed 12 rows containing missing values (geom_point).
dep_by_station2 <-Trips %>%
group_by(sstation) %>%
mutate(is_casual = client %in% "Casual") %>%
mutate(more_casual = sum(is_casual)/sum(!is_casual)) %>% # ratio of casual to registered departures
filter(more_casual > 0.5) %>%
left_join(Stations,
by = c("sstation"="name"))
Wash_map <- get_stamenmap(
bbox = c(left = -77.4178, bottom = 38.77, right = -76.3406, top = 39.15),
maptype = "terrain",
zoom = 10)
ggmap(Wash_map)+
geom_point(data = dep_by_station2,
aes(x = long, y = lat),
alpha = 1,
size = 1.5,
color = "red") +
theme_map()+
labs(title = "Stations with high number of casual riders")
As you can see from the map, stations that have more casual riders are located closer to the middle of Washington, D.C. This is expected given how much of a draw D.C. is for tourists, who are most likely casual bikers. It's interesting that there are also a lot of casual riders near Darnestown.
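For comparison, here is an alternative sketch that computes the share of casual departures per station with summarize() rather than mutate(); it assumes the quantity of interest is the proportion of departures made by casual riders (the code above uses the ratio of casual to registered riders), and the object name casual_by_station is just for illustration:

casual_by_station <- Trips %>%
group_by(sstation) %>%
summarize(prop_casual = mean(client %in% "Casual")) %>% # share of departures by casual riders
filter(prop_casual > 0.5) %>%
left_join(Stations, by = c("sstation" = "name"))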
The following exercises will use the COVID-19 data from the NYT.
covid19 <-covid19 %>%
mutate(state_name = str_to_lower(state))
states_map <- map_data("state")
covid19 %>%
group_by(state_name) %>%
filter(date == max(date)) %>% # keep only the most recent date for each state
ungroup() %>%
ggplot() +
geom_map(map = states_map,
aes(map_id = state_name,
fill = cases)) +
#This assures the map looks decently nice:
expand_limits(x = states_map$long, y = states_map$lat) +
theme_map()+
labs(title = "Most recent number of cases in each state by color")
I see that California, Texas, New York, and Florida all have the highest numbers of cases, driven mostly by high populations and by how each state handled the pandemic. What's wrong with this map is that we can only estimate how many cases there actually are, because color is a hard way to represent numbers precisely.
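One way to make the counts easier to read off the color scale (a sketch, not something the exercise asks for) is to put the fill on a log scale with a viridis palette, so states with fewer cases are not all washed out at the bottom of the scale:

covid19 %>%
group_by(state_name) %>%
filter(date == max(date)) %>% # most recent observation for each state
ungroup() %>%
ggplot() +
geom_map(map = states_map,
aes(map_id = state_name,
fill = cases)) +
scale_fill_viridis_c(trans = "log10") +
expand_limits(x = states_map$long, y = states_map$lat) +
theme_map() +
labs(title = "Most recent number of cases in each state (log color scale)",
fill = "Cases")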
covid19 <-covid19 %>%
mutate(state_name = str_to_lower(state))
states_map <- map_data("state")
covid19 %>%
group_by(state_name) %>%
filter(date == max(date)) %>% # keep only the most recent date for each state
ungroup() %>%
left_join(census_pop_est_2018,
by = c("state_name" = "state")) %>%
mutate(cases_per_10000 = (cases/est_pop_2018)*10000) %>%
ggplot() +
geom_map(map = states_map,
aes(map_id = state_name,
fill = cases_per_10000)) +
#This assures the map looks decently nice:
expand_limits(x = states_map$long, y = states_map$lat) +
theme_map()+
labs(title = "Most recent number of cases per 10,000 people in each state by color")
These exercises use the datasets MplsStops and MplsDemo from the carData library. Search for them in Help to find out more information.
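A quick way to look them over before mapping (just a convenience sketch):

?MplsStops # help page with variable descriptions
glimpse(MplsStops) # stop-level records, including lat, long, and problem
glimpse(MplsDemo) # neighborhood-level demographics, including hhIncome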
Use the MplsStops dataset to find out how many stops there were for each neighborhood and the proportion of stops that were for a suspicious vehicle or person. Sort the results from most to least number of stops. Save this as a dataset called mpls_suspicious and display the table.
mpls_suspicious <- MplsStops %>%
group_by(neighborhood) %>%
count(problem) %>%
mutate(prop_suspicious= n/sum(n)) %>%
filter(problem == "suspicious")
mpls_suspicious %>%
arrange(desc(n))
Use a leaflet map and the MplsStops dataset to display each of the stops on the map as a small point. Color the points differently depending on whether they were for a suspicious vehicle/person or a traffic stop (the problem variable). HINTS: use addCircleMarkers, set stroke = FALSE, and use colorFactor() to create a palette.
pal <- colorFactor("viridis",
domain = MplsStops$problem)
leaflet(data = MplsStops) %>%
addProviderTiles(providers$OpenStreetMap) %>%
addCircles(stroke = FALSE,
lng = ~long,
lat = ~lat,
label = ~neighborhood,
weight = 5,
opacity = 1,
color = ~pal(problem))
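The hint mentions addCircleMarkers(); a version using it would look something like this (the radius of 2 is just a guess at a reasonable point size):

leaflet(data = MplsStops) %>%
addProviderTiles(providers$OpenStreetMap) %>%
addCircleMarkers(lng = ~long,
lat = ~lat,
radius = 2,
stroke = FALSE,
fillOpacity = 0.7,
fillColor = ~pal(problem),
label = ~problem)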
(The original setup chunk is marked eval=FALSE.) Although it looks like it only links to the .shp file, you need the entire folder of files to create the mpls_nbhd dataset. These data contain information about the geometries of the Minneapolis neighborhoods. Using the mpls_nbhd dataset as the base file, join the mpls_suspicious and MplsDemo datasets to it by neighborhood (careful: they are named different things in the different files). Call this new dataset mpls_all.
mpls_nbhd <- st_read("Minneapolis_Neighborhoods/Minneapolis_Neighborhoods.shp", quiet = TRUE)
mpls_all<-mpls_nbhd %>%
left_join(mpls_suspicious,
by = c("BDNAME"="neighborhood")) %>%
left_join(MplsDemo,
by = c("BDNAME"="neighborhood"))
Use leaflet to create a map from the mpls_all data that colors the neighborhoods by prop_suspicious. Display the neighborhood name as you scroll over it. Describe what you observe in the map.
pal <- colorNumeric("viridis",
domain = mpls_all$prop_suspicious)
leaflet(mpls_all) %>%
addTiles() %>%
addPolygons(
fillColor = ~pal(prop_suspicious), #fills according to that variable
fillOpacity = 0.7,
highlight = highlightOptions(weight = 5,
color = "black",
fillOpacity = 0.9,
bringToFront = FALSE),
popup = ~paste(BDNAME)) %>%
addLegend(pal = pal,
values = ~prop_suspicious,
opacity = 0.5,
title = NULL,
position = "bottomright")
Southeast Minneapolis has a higher rate of suspicious stops than anywhere else in the city.
Use leaflet to create a map of your own choosing. Come up with a question you want to try to answer and use the map to help answer that question. Describe what your map shows.

Question: Do neighborhoods with a high household income have more or fewer police stops than neighborhoods with a lower household income?
pal <- colorNumeric("viridis",
domain = mpls_all$hhIncome)
leaflet(mpls_all) %>%
addTiles() %>%
addPolygons(
fillColor = ~pal(hhIncome), #fills according to that variable
fillOpacity = 0.7,
highlight = highlightOptions(weight = 5,
color = "black",
fillOpacity = 0.9,
bringToFront = FALSE),
popup = ~paste(BDNAME,": ",
round(n, 2),
sep="")) %>%
addLegend(pal = pal,
values = ~hhIncome,
opacity = 0.5,
title = NULL,
position = "bottomright")
I can see from the map that some neighborhoods with lower household incomes have a similar number of suspicious stops as those with higher household incomes. However, none of the neighborhoods with household incomes around 80-100k have over 150 stops.
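To check the same question without a map (a sketch; n here is the number of suspicious stops per neighborhood from mpls_suspicious):

mpls_all %>%
st_drop_geometry() %>% # drop the sf geometry so this plots like a regular data frame
ggplot(aes(x = hhIncome, y = n)) +
geom_point() +
labs(title = "Suspicious stops vs. household income by neighborhood",
x = "Household income (hhIncome)",
y = "Number of suspicious stops")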
DID YOU REMEMBER TO UNCOMMENT THE OPTIONS AT THE TOP?